ethical concern
Neglected Risks: The Disturbing Reality of Children's Images in Datasets and the Urgent Call for Accountability
Caetano, Carlos, Santos, Gabriel O. dos, Petrucci, Caio, Barros, Artur, Laranjeira, Camila, Ribeiro, Leo S. F., de Mendonça, Júlia F., Santos, Jefersson A. dos, Avila, Sandra
Including children's images in datasets has raised ethical concerns, particularly regarding privacy, consent, data protection, and accountability. These datasets, often built by scraping publicly available images from the Internet, can expose children to risks such as exploitation, profiling, and tracking. Despite growing recognition of these issues, approaches for addressing them remain limited. We explore the ethical implications of using children's images in AI datasets and propose a pipeline to detect and remove such images. As a use case, we built the pipeline on a Vision-Language Model under the Visual Question Answering task and tested it on the #PraCegoVer dataset. We also evaluated the pipeline on a subset of 100,000 images from the Open Images V7 dataset to assess its effectiveness in detecting and removing images of children. The pipeline serves as a baseline for future research, providing a starting point for more comprehensive tools and methodologies. While we leverage existing models trained on potentially problematic data, our goal is to expose and address this issue. We do not advocate for training or deploying such models, but instead call for urgent community reflection and action to protect children's rights. Ultimately, we aim to encourage the research community to exercise more than the usual care in creating new datasets, and to inspire the development of tools to protect the fundamental rights of vulnerable groups, particularly children.
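The detect-and-remove pipeline described in this abstract can be sketched as follows: a Vision-Language Model is asked a yes/no question about each image, and images flagged as depicting children are separated out. This is a minimal illustration, not the authors' implementation; the `vqa` callable, the question wording, and all file names are assumptions, with a stub standing in for a real VQA model.

```python
# Hedged sketch of a VQA-based detect-and-remove pipeline, assuming the
# model answers a yes/no question per image. The `vqa` callable is a
# stand-in for any VQA-capable Vision-Language Model.

QUESTION = "Is there a child in this image?"

def filter_children(image_paths, vqa, question=QUESTION):
    """Split images into (kept, removed) based on the model's answers."""
    kept, removed = [], []
    for path in image_paths:
        answer = vqa(path, question).strip().lower()
        # Any affirmative answer routes the image to the removal list.
        (removed if answer.startswith("yes") else kept).append(path)
    return kept, removed

# Stubbed model for illustration only (a real pipeline would call a VLM):
stub = lambda path, q: "Yes" if "kids" in path else "No"
kept, removed = filter_children(["beach.jpg", "kids_party.jpg"], stub)
```

In practice the answer parsing and question phrasing would need care (models hedge, answer in full sentences, or err on ambiguous images), which is why the abstract frames this only as a baseline.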
- South America > Brazil > São Paulo > Campinas (0.04)
- Oceania > Australia (0.04)
- South America > Brazil > Minas Gerais > Belo Horizonte (0.04)
- (3 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.93)
"I Hadn't Thought About That": Creators of Human-like AI Weigh in on Ethics And Neurodivergence
Rizvi, Naba, Smith, Taggert, Vidyala, Tanvi, Bolds, Mya, Strickland, Harper, Begel, Andrew, Williams, Rua, Munyaka, Imani
Human-like AI agents such as robots and chatbots are becoming increasingly popular, but they present a variety of ethical concerns. The first concern is in how we define humanness, and how our definition impacts communities historically dehumanized by scientific research. Autistic people in particular have been dehumanized by being compared to robots, making it even more important to ensure this marginalization is not reproduced by AI that may promote neuronormative social behaviors. Second, the ubiquitous use of these agents raises concerns surrounding model biases and accessibility. In our work, we investigate the experiences of the people who build and design these technologies to gain insights into their understanding and acceptance of neurodivergence, and the challenges in making their work more accessible to users with diverse needs. Even though neurodivergent individuals are often marginalized for their unique communication styles, nearly all participants overlooked the conclusions their end-users and other AI system makers may draw about communication norms from the implementation and interpretation of humanness applied in participants' work. This highlights a major gap in their broader ethical considerations, compounded by some participants' neuronormative assumptions about the behaviors and traits that distinguish "humans" from "bots" and the replication of these assumptions in their work. We examine the impact this may have on autism inclusion in society and provide recommendations for additional systemic changes towards more ethical research directions.
- Asia > Middle East (0.15)
- Africa > Middle East (0.14)
- Europe > Middle East (0.14)
- (15 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Personal > Interview (1.00)
- Health & Medicine > Therapeutic Area > Neurology > Autism (0.76)
- Government > Regional Government > North America Government > United States Government (0.46)
Understanding University Students' Use of Generative AI: The Roles of Demographics and Personality Traits
Deng, Newnew, Liu, Edward Jiusi, Zhai, Xiaoming
The use of generative AI (GAI) among university students is rapidly increasing, yet empirical research on students' GAI use and the factors influencing it remains limited. To address this gap, we surveyed 363 undergraduate and graduate students in the United States, examining their GAI usage and how it relates to demographic variables and personality traits based on the Big Five model (i.e., extraversion, agreeableness, conscientiousness, emotional stability, and intellect/imagination). Our findings reveal: (a) Students in higher academic years are more inclined to use GAI and prefer it over traditional resources. (b) Non-native English speakers use and adopt GAI more readily than native speakers. (c) Compared to White students, Asian students report higher GAI usage, perceive greater academic benefits, and express a stronger preference for it. Similarly, Black students report a more positive impact of GAI on their academic performance. Personality traits also play a significant role in shaping perceptions and usage of GAI. After controlling for demographic factors, we found that personality still significantly predicts GAI use and attitudes: (a) Students with higher conscientiousness use GAI less. (b) Students higher in agreeableness perceive a less positive impact of GAI on academic performance and express more ethical concerns about using it for academic work. (c) Students with higher emotional stability report a more positive impact of GAI on learning and fewer concerns about its academic use. (d) Students with higher extraversion show a stronger preference for GAI over traditional resources. (e) Students with higher intellect/imagination tend to prefer traditional resources. These insights highlight the need for universities to provide personalized guidance to ensure students use GAI effectively, ethically, and equitably in their academic pursuits.
- North America > United States > Georgia > Clarke County > Athens (0.14)
- North America > United States > New York (0.04)
- North America > United States > Indiana > Tippecanoe County > West Lafayette (0.04)
- (2 more...)
Clones in the Machine: A Feminist Critique of Agency in Digital Cloning
This paper critiques digital cloning in academic research, highlighting how it exemplifies AI solutionism. Digital clones, which replicate user data to simulate behavior, are often seen as scalable tools for behavioral insights. However, this framing obscures ethical concerns around consent, agency, and representation. Drawing on feminist theories of agency, the paper argues that digital cloning oversimplifies human complexity and risks perpetuating systemic biases. To address these issues, it proposes decentralized data repositories and dynamic consent models, promoting ethical, context-aware AI practices that challenge the reductionist logic of AI solutionism.
- Europe > Netherlands > North Holland > Amsterdam (0.41)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- Information Technology > Security & Privacy (0.91)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.48)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.48)
A Qualitative Study of User Perception of M365 AI Copilot
Bano, Muneera, Zowghi, Didar, Whittle, Jon, Zhu, Liming, Reeson, Andrew, Martin, Rob, Parson, Jen
Adopting AI copilots in professional workflows presents opportunities for enhanced productivity, efficiency, and decision-making. In this paper, we present results from a six-month trial of M365 Copilot conducted at our organisation in 2024. A qualitative interview study was carried out with 27 participants. The study explored user perceptions of M365 Copilot's effectiveness, productivity impact, evolving expectations, ethical concerns, and overall satisfaction. Initial enthusiasm for the tool was met with mixed post-trial experiences. While some users found M365 Copilot beneficial for tasks such as email coaching, meeting summaries, and content retrieval, others reported unmet expectations in areas requiring deeper contextual understanding, reasoning, and integration with existing workflows. Ethical concerns were a recurring theme, with users highlighting issues related to data privacy, transparency, and AI bias. While M365 Copilot demonstrated value in specific operational areas, its broader impact remained constrained by usability limitations and the need for human oversight to validate AI-generated outputs.
- Oceania > Australia > Australian Capital Territory > Canberra (0.04)
- North America > United States > Virginia (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Personal > Interview (0.93)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Education (1.00)
- (2 more...)
Development of Application-Specific Large Language Models to Facilitate Research Ethics Review
Mann, Sebastian Porsdam, Jiehao, Joel Seah, Latham, Stephen R., Savulescu, Julian, Aboy, Mateo, Earp, Brian D.
Institutional review boards (IRBs) play a crucial role in ensuring the ethical conduct of human subjects research, but face challenges including inconsistency, delays, and inefficiencies. We propose the development and implementation of application-specific large language models (LLMs) to facilitate IRB review processes. These IRB-specific LLMs would be fine-tuned on IRB-specific literature and institutional datasets, and equipped with retrieval capabilities to access up-to-date, context-relevant information. We outline potential applications, including pre-review screening, preliminary analysis, consistency checking, and decision support. While addressing concerns about accuracy, context sensitivity, and human oversight, we acknowledge remaining challenges such as over-reliance on AI and the need for transparency. By enhancing the efficiency and quality of ethical review while maintaining human judgment in critical decisions, IRB-specific LLMs offer a promising tool to improve research oversight. We call for pilot studies to evaluate the feasibility and impact of this approach.
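The "pre-review screening" application proposed in this abstract lends itself to a simple illustration: before an IRB-specific LLM performs any substantive analysis, a deterministic check can flag protocols missing required sections. This is a hypothetical sketch, not the authors' system; the section names and function are illustrative assumptions.

```python
# Hedged sketch of deterministic pre-review screening ahead of an
# IRB-specific LLM's analysis. Section names are hypothetical examples.

REQUIRED_SECTIONS = ["consent procedure", "risk assessment", "data handling"]

def screen(protocol_sections):
    """Flag required sections absent from a submitted protocol.

    Returns a dict so downstream steps (e.g. LLM-based preliminary
    analysis) can skip incomplete submissions early.
    """
    missing = [s for s in REQUIRED_SECTIONS if s not in protocol_sections]
    return {"complete": not missing, "missing": missing}

result = screen(["risk assessment"])
# An incomplete protocol is returned to the applicant before any
# model-based review, preserving reviewer and compute time.
```

Keeping this first gate rule-based rather than model-based matches the abstract's emphasis on maintaining human oversight and avoiding over-reliance on AI for consequential decisions.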
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- (5 more...)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.88)
- Law (1.00)
- Government (0.93)
- Information Technology > Security & Privacy (0.68)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.46)
Human services organizations and the responsible integration of AI: Considering ethics and contextualizing risk(s)
Perron, Brian E., Goldkind, Lauri, Qi, Zia, Victor, Bryan G.
This paper examines the responsible integration of artificial intelligence (AI) in human services organizations (HSOs), proposing a nuanced framework for evaluating AI applications across multiple dimensions of risk. The authors argue that ethical concerns about AI deployment -- including professional judgment displacement, environmental impact, model bias, and data laborer exploitation -- vary significantly based on implementation context and specific use cases. They challenge the binary view of AI adoption, demonstrating how different applications present varying levels of risk that can often be effectively managed through careful implementation strategies. The paper highlights promising solutions, such as local large language models, that can facilitate responsible AI integration while addressing common ethical concerns. The authors propose a dimensional risk assessment approach that considers factors like data sensitivity, professional oversight requirements, and potential impact on client wellbeing. They conclude by outlining a path forward that emphasizes empirical evaluation, starting with lower-risk applications and building evidence-based understanding through careful experimentation. This approach enables organizations to maintain high ethical standards while thoughtfully exploring how AI might enhance their capacity to serve clients and communities effectively.
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- Asia > Singapore (0.04)
- North America > United States > New York (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (3 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
Ethical Concerns of Generative AI and Mitigation Strategies: A Systematic Mapping Study
Huang, Yutan, Arora, Chetan, Houng, Wen Cheng, Kanij, Tanjila, Madulgalla, Anuradha, Grundy, John
The evolution of Generative AI, particularly Large Language Models (LLMs), has seen remarkable advancements since 2020 with the introduction of models like ChatGPT and Bard. LLMs have revolutionized tasks such as writing assistance, code generation, and customer support automation, by leveraging vast amounts of data to generate coherent and contextually relevant natural language (NL) responses [1, 2]. As a subset of Generative AI--systems designed to create new content--LLMs go beyond traditional AI techniques, which focus primarily on analyzing existing data. LLMs, in contrast, are capable of generating text, images, and music that mimic human creativity [3]. This capability is powered by advancements in neural network architectures, especially transformers, which enable LLMs to learn the nuances of human language and produce semantically accurate content [4].
- North America > United States (0.28)
- Oceania > Australia > Victoria (0.04)
- Asia > China (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Overview (0.93)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (3 more...)
LLM Content Moderation and User Satisfaction: Evidence from Response Refusals in Chatbot Arena
LLM safety and ethical alignment are widely discussed, but the impact of content moderation on user satisfaction remains underexplored. To address this, we analyze nearly 50,000 Chatbot Arena response pairs using a novel fine-tuned RoBERTa model, trained on hand-labeled data to disentangle refusals due to ethical concerns from refusals due to technical disabilities or lack of information. Our findings reveal a significant refusal penalty from content moderation, with users choosing ethics-based refusals roughly one-fourth as often as standard responses from their preferred LLM. However, context and phrasing play critical roles: refusals on highly sensitive prompts, such as illegal content, achieve higher win rates than refusals on less sensitive ethical concerns, and longer responses closely aligned with the prompt perform better. These results emphasize the need for nuanced moderation strategies that balance ethical safeguards with user satisfaction. Moreover, we find that the refusal penalty is notably lower in evaluations using the LLM-as-a-Judge method, highlighting discrepancies between user and automated assessments. Trigger Warning and Disclaimer: This paper discusses content moderation in LLMs, including sensitive topics such as hate speech, harassment, and illegal activities, as part of an analysis of LLM performance and user satisfaction. The study does not endorse or promote any harmful, illegal, or unethical content, nor does it make any normative judgments about the "right amount" of content moderation.
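The refusal-penalty measurement described in this abstract reduces to comparing win rates across response categories in pairwise preference data. The sketch below illustrates that computation only; the record layout and field names are assumptions, and the toy data is fabricated purely to demonstrate the arithmetic (a refusal win rate one-fourth that of standard responses, echoing the abstract's finding).

```python
# Illustrative win-rate comparison over pairwise preference records,
# assuming each record notes the response kind and whether users chose it.
# Field names and data are hypothetical.

def win_rate(battles, kind):
    """Fraction of pairwise comparisons won by responses of a given kind."""
    relevant = [b for b in battles if b["kind"] == kind]
    if not relevant:
        return 0.0
    return sum(b["won"] for b in relevant) / len(relevant)

battles = [
    {"kind": "refusal", "won": 1}, {"kind": "refusal", "won": 0},
    {"kind": "refusal", "won": 0}, {"kind": "refusal", "won": 0},
    {"kind": "standard", "won": 1}, {"kind": "standard", "won": 1},
    {"kind": "standard", "won": 0}, {"kind": "standard", "won": 1},
]
refusal_rate = win_rate(battles, "refusal")    # 0.25 on this toy data
standard_rate = win_rate(battles, "standard")  # 0.75 on this toy data
```

The study's actual contribution sits upstream of this step: a fine-tuned RoBERTa classifier labels which refusals are ethics-based before any such aggregation is computed.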
- Europe > France (0.04)
- North America > Canada (0.04)
- Europe > Switzerland (0.04)
- (2 more...)
Harnessing Large Language Models for Mental Health: Opportunities, Challenges, and Ethical Considerations
Large Language Models (LLMs) are transforming mental health care by enhancing accessibility, personalization, and efficiency in therapeutic interventions. These AI-driven tools empower mental health professionals with real-time support, improved data integration, and the ability to encourage care-seeking behaviors, particularly in underserved communities. By harnessing LLMs, practitioners can deliver more empathetic, tailored, and effective support, addressing longstanding gaps in mental health service provision. However, their implementation comes with significant challenges and ethical concerns. Performance limitations, data privacy risks, biased outputs, and the potential for generating misleading information underscore the critical need for stringent ethical guidelines and robust evaluation mechanisms. The sensitive nature of mental health data further necessitates meticulous safeguards to protect patient rights and ensure equitable access to AI-driven care. Proponents argue that LLMs have the potential to democratize mental health resources, while critics warn of risks such as misuse and the diminishment of human connection in therapy. Achieving a balance between innovation and ethical responsibility is imperative. This paper examines the transformative potential of LLMs in mental health care, highlights the associated technical and ethical complexities, and advocates for a collaborative, multidisciplinary approach to ensure these advancements align with the goal of providing compassionate, equitable, and effective mental health support.
- Europe > United Kingdom > England > Dorset > Bournemouth (0.05)
- Africa > Zambia > Southern Province > Choma (0.04)